
    SARP Net: A Secure, Anonymous, Reputation-Based, Peer-To-Peer Network

    Since the advent of Napster, the idea of applying peer-to-peer (P2P) architectures to file-sharing applications has become popular, spawning other P2P networks like Gnutella, Morpheus, Kazaa, and BitTorrent. This growth in P2P development has nearly eradicated the traditional client-server structure in the file-sharing model, placing the emphasis instead on faster query processing, deeper levels of decentralization, and methods to protect against copyright-law violations. SARP Net is a secure, anonymous, decentralized P2P overlay network designed to protect the activity of its users in its own file-sharing community. It is secure in that public-key encryption guards messages against eavesdroppers. The protocol guarantees user anonymity by hopping messages from node to node, preventing any network observer from pinpointing the origin of a file query or the source of a shared file. To further enhance the system's security, a reputation scheme is incorporated to police nodes against malicious activity, maintain the overlay's topology, and enforce rules that protect node identity.
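
    The hop-based anonymity idea can be illustrated with a minimal simulation. The sketch below is an assumption-laden toy, not the SARP Net protocol itself: each node relays a query to a random neighbor for a fixed hop budget, so an observer watching any single link cannot tell whether a sender originated the query or is merely relaying it. The per-hop public-key encryption the abstract describes is stubbed out as an identity function, and the topology and function names are illustrative.

    ```python
    import random

    def encrypt(payload, node_id):
        return payload  # stand-in for public-key encryption to `node_id`

    def forward_query(origin, neighbors, query, hops=5):
        """Relay `query` through `hops` random intermediate nodes."""
        path = [origin]
        current = origin
        for _ in range(hops):
            nxt = random.choice(neighbors[current])
            query = encrypt(query, nxt)  # re-wrap for the next hop
            path.append(nxt)
            current = nxt
        return current, path  # the final hop issues the query on origin's behalf

    # Toy ring topology: node i is connected to its two ring neighbors.
    neighbors = {i: [(i - 1) % 8, (i + 1) % 8] for i in range(8)}
    exit_node, path = forward_query(0, neighbors, "file.mp3?")
    print("query exits at node", exit_node, "via", path)
    ```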

    Complementary Layered Learning

    Layered learning is a machine learning paradigm used to develop autonomous robotic agents by decomposing a complex task into simpler subtasks and learning each one sequentially. Although the paradigm continues to succeed in multiple domains, performance can be unexpectedly unsatisfactory. Using Boolean-logic problems and autonomous agent navigation, we show that poor performance arises when the learner forgets how to perform earlier learned subtasks too quickly (favoring plasticity) or has difficulty learning new things (favoring stability). We demonstrate that this imbalance can hinder learning so severely that task performance is no better than that of a suboptimal learning technique, monolithic learning, which does not use decomposition. Through the resulting analyses, we identify factors that can lead to imbalance and their negative effects, providing a deeper understanding of stability and plasticity in decomposition-based approaches such as layered learning. To combat the negative effects of the imbalance, a complementary learning system is applied to layered learning. The new technique augments the original learning approach with dual storage region policies that prevent useful information from being removed from an agent's policy prematurely. In multi-agent experiments, the proposed augmentations yield a 28% task-performance increase over the original technique.
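
    One plausible reading of the dual storage region idea is sketched below, assuming a rule-based policy that maps observations to actions. The class and method names are illustrative, not from the paper: rules that proved useful in earlier layers are consolidated into a protected long-term region, while only the short-term region stays plastic, so new-layer learning cannot overwrite proven earlier-subtask behavior.

    ```python
    class DualRegionPolicy:
        def __init__(self):
            self.long_term = {}   # protected: earlier-layer rules live here
            self.short_term = {}  # plastic: current-layer learning happens here

        def update(self, obs, action):
            self.short_term[obs] = action  # ordinary (plastic) learning step

        def consolidate(self, min_utility, utilities):
            # Promote rules that proved useful during the layer's evaluation.
            for obs, action in list(self.short_term.items()):
                if utilities.get(obs, 0.0) >= min_utility:
                    self.long_term[obs] = action
                    del self.short_term[obs]

        def act(self, obs, default=None):
            # Long-term memory wins, preserving earlier subtask proficiency.
            return self.long_term.get(obs, self.short_term.get(obs, default))

    policy = DualRegionPolicy()
    policy.update("see-enemy", "flee")
    policy.consolidate(min_utility=0.5, utilities={"see-enemy": 0.9})
    print(policy.act("see-enemy"))  # "flee", now protected in long_term
    ```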

    Evolving A Non-Playable Character Team With Layered Learning

    Layered Learning is an iterative machine learning technique used to train agents to perform tasks. The technique decomposes a task into simpler components and trains the agent on progressively more complex subtasks until it can solve the overall task. Layered Learning has been used successfully to instruct computer programs to solve Boolean-logic problems, teach robots how to walk, and train RoboCup soccer-playing agents. The proposed work answers the question of how well Layered Learning applies to the evolved development of a heterogeneous team of Non-Playable Characters (NPCs) in a video game. The work compares Layered Learning against evolving NPCs with monolithic approaches. Experimental data show that Layered Learning can produce successful NPCs and that the approach performs well against monolithic evaluation. © 2011 IEEE
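
    A toy layered-learning run on a Boolean-logic problem, in the spirit of the Boolean experiments the abstract mentions, can make the layering concrete. The details below are illustrative, not the paper's setup: layer 1 evolves OR and NAND truth tables by mutation-only hill climbing, then layer 2 evolves a combiner over the frozen layer-1 outputs; since XOR(a, b) == AND(OR(a, b), NAND(a, b)), a correct combiner exists.

    ```python
    import random

    CASES = [(0, 0), (0, 1), (1, 0), (1, 1)]

    def evolve(fitness, steps=500):
        genome = [random.randint(0, 1) for _ in range(4)]  # 4-entry truth table
        for _ in range(steps):
            child = genome[:]
            child[random.randrange(4)] ^= 1          # point mutation
            if fitness(child) >= fitness(genome):    # keep non-worse child
                genome = child
        return genome

    def table_fitness(target):
        return lambda g: sum(g[2 * a + b] == target(a, b) for a, b in CASES)

    # Layer 1: learn the subtasks, then freeze them.
    or_tab = evolve(table_fitness(lambda a, b: a | b))
    nand_tab = evolve(table_fitness(lambda a, b: 1 - (a & b)))

    # Layer 2: learn to combine the frozen layer-1 outputs into XOR.
    def combiner_fitness(g):
        return sum(
            g[2 * or_tab[2 * a + b] + nand_tab[2 * a + b]] == (a ^ b)
            for a, b in CASES
        )

    xor_comb = evolve(combiner_fitness)
    print("layer-2 XOR accuracy:", combiner_fitness(xor_comb), "/ 4")
    ```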

    Mitigating Catastrophic Forgetting with Complementary Layered Learning

    Catastrophic forgetting is a stability–plasticity imbalance that causes a machine learner to lose previously gained knowledge that is critical for performing a task. The imbalance occurs in transfer learning and negatively affects the learner's performance, particularly in neural networks and layered learning. This work proposes a complementary learning technique that introduces long- and short-term memory to layered learning to reduce the negative effects of catastrophic forgetting. In particular, the dual-memory system is applied to non-neural-network instances of layered learning based on evolutionary computation and Q-learning, because these techniques are used to develop decision-making capabilities for physical robots. Experiments evaluate the new learning augmentation in a multi-agent system simulation in which autonomous unmanned aerial vehicles learn to collaborate and maneuver to survey an area effectively. Through these direct-policy and value-based learning experiments, the proposed complementary layered learning is demonstrated to significantly improve task performance over standard layered learning, successfully balancing stability and plasticity.
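
    For the Q-learning case, one illustrative reading of the long-/short-term memory idea is a dual Q-table, sketched below under stated assumptions (discrete states and actions; the class name, mixing weights, and learning rates are ours, not the paper's exact algorithm): the short-term table learns fast (plastic), the long-term table tracks it slowly (stable), and behavior averages the two, so a burst of new-task updates cannot instantly erase old-task values.

    ```python
    from collections import defaultdict

    class DualMemoryQ:
        def __init__(self, actions, fast=0.5, slow=0.01, gamma=0.9):
            self.q_short = defaultdict(float)  # plastic, high learning rate
            self.q_long = defaultdict(float)   # stable, consolidated slowly
            self.actions, self.fast, self.slow, self.gamma = actions, fast, slow, gamma

        def value(self, s, a):
            return 0.5 * (self.q_short[(s, a)] + self.q_long[(s, a)])

        def best_action(self, s):
            return max(self.actions, key=lambda a: self.value(s, a))

        def update(self, s, a, reward, s_next):
            target = reward + self.gamma * max(
                self.q_short[(s_next, b)] for b in self.actions
            )
            td = target - self.q_short[(s, a)]
            self.q_short[(s, a)] += self.fast * td  # plastic update
            # Slow consolidation: long-term drifts toward short-term.
            self.q_long[(s, a)] += self.slow * (self.q_short[(s, a)] - self.q_long[(s, a)])

    agent = DualMemoryQ(actions=[0, 1])
    agent.update(s="start", a=1, reward=1.0, s_next="goal")
    print(agent.best_action("start"))  # -> 1
    ```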

    A Demonstration Of Stability-Plasticity Imbalance In Multi-Agent, Decomposition-Based Learning

    Layered learning is a machine learning paradigm used in conjunction with direct-policy-search reinforcement learning methods to find high-performance agent behaviors for complex tasks. At its core, layered learning is a decomposition-based paradigm that shares many characteristics with robot shaping, transfer learning, hierarchical decomposition, and incremental learning. Previous studies have provided evidence that layered learning can outperform standard monolithic methods of learning in many cases. The dilemma of balancing stability and plasticity is a common problem in machine learning that forces learning agents to compromise between retaining previously learned information needed to perform a task and incorporating new incoming information. Although existing work implies that a stability-plasticity imbalance greatly limits layered learning agents' ability to learn optimally, no work explicitly verifies the existence of the imbalance or its causes. This work investigates the stability-plasticity imbalance and demonstrates that layered learning indeed heavily favors plasticity, which can cause learned subtask proficiency to be lost when new tasks are learned. We conclude by identifying potential causes of the imbalance in layered learning and providing high-level advice on how to mitigate its negative effects.
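
    The plasticity-favoring failure mode is easy to reproduce in miniature. The self-contained demonstration below uses a least-squares toy of our own devising (not the paper's experiments): a single shared parameter vector is trained toward subtask A, then toward subtask B with no protection, and subtask-A proficiency is re-measured afterwards.

    ```python
    def err(w, target):
        return sum((wi - ti) ** 2 for wi, ti in zip(w, target))

    def train(w, target, lr=0.1, steps=200):
        for _ in range(steps):
            w = [wi - lr * 2 * (wi - ti) for wi, ti in zip(w, target)]
        return w

    task_a, task_b = [1.0, 0.0, 1.0], [0.0, 1.0, 0.0]
    w = train([0.0, 0.0, 0.0], task_a)
    print("task A error after learning A:", round(err(w, task_a), 4))  # ~0
    w = train(w, task_b)  # plastic sequential learning, no protection
    print("task A error after learning B:", round(err(w, task_a), 4))  # ~3: forgotten
    ```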

    An Analysis Of Increased Vertical Scaling In Three-Dimensional Virtual World Simulation

    In this paper, we describe an analysis of the effect of vertical computational scaling on the performance of a simulation-based training prototype currently under development by the U.S. Army Research Laboratory. The United States military is interested in facilitating Warfighter training by investigating large-scale, realistic virtual operational environments. To support expanded training at higher echelons, virtual world simulators need to scale to support more simultaneous client connections, more intelligent agents, and more physics interactions. This work provides an in-depth analysis of a virtual world simulator under different hardware profiles to determine the effect of increased vertical computational scaling.
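
    The shape of such a measurement can be sketched in a few lines. The harness below is a rough stand-in under our own assumptions (the study instruments an actual virtual world server, not this toy function): the same total CPU-bound workload is timed under increasing worker counts to see how added cores change completion time.

    ```python
    import time
    from multiprocessing import Pool

    def physics_standin(n):
        return sum(i * i for i in range(n))  # fixed CPU-bound unit of work

    if __name__ == "__main__":
        jobs = [200_000] * 32  # constant total workload across trials
        for workers in (1, 2, 4, 8):
            start = time.perf_counter()
            with Pool(workers) as pool:
                pool.map(physics_standin, jobs)
            print(f"{workers} worker(s): {time.perf_counter() - start:.2f}s")
    ```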

    Human Entities' Effect On Server Performance In Distributed Virtual World Training

    The use of virtual worlds for training continues to expand in the military as advancements in simulation technology have enabled more efficient and effective simulation-based training. One of the potential major advantages of virtual world training is the ability to support collective training without the need for individuals to be physically co-located. For distributed military collective training to become a reality, however, the virtual world simulation architecture must be able to support distributed, synchronous, and non-deterministic training. This paper continues our research into optimizing the Military OpenSimulator Enterprise Strategy (MOSES) server architecture to support collective, distributed training. Here, we examine the effect that the number of human users has on the server's processing memory, supporting our development of a predictive model that determines how many resources are required to serve a target number of concurrent users in the virtual world. We discovered a statistically significant difference in the amount of processing memory utilized based upon server hardware configuration. Furthermore, we found a positive, linear association between the number of human avatars in the virtual world and the amount of processing memory required by the server. These observations let virtual world designers and administrators know the resource demands associated with each human user. The results of this paper confirm our hypotheses and provide further insight into optimizing the server architecture to support virtual world training.
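
    A positive linear association of this kind supports an ordinary least-squares fit of memory use against concurrent-user count. The sketch below shows the shape of that fit; the sample points are made-up placeholders, to be replaced by the paper's measurements.

    ```python
    users = [1, 2, 4, 8, 16, 32]
    mem_mb = [510, 540, 600, 720, 960, 1440]  # hypothetical memory samples

    n = len(users)
    mean_x, mean_y = sum(users) / n, sum(mem_mb) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(users, mem_mb)) \
        / sum((x - mean_x) ** 2 for x in users)
    intercept = mean_y - slope * mean_x
    print(f"memory ~= {slope:.1f} MB/user + {intercept:.1f} MB baseline")
    ```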

    Physics Engine Threading Design And Object-Scalability In Virtual Simulation

    The U.S. Army Research Laboratory (ARL) is investigating technologies and methods to enhance the next generation of tactical simulation-based trainers. A primary research objective is to increase the number of Soldiers who can simultaneously train and collaborate in a shared virtual environment. Current virtual programs of record cannot support the Department of the Army's goal to train at the company echelon (200 Soldiers) in a virtual environment and are limited to the platoon echelon (42 Soldiers) of concurrent trainees. ARL has identified the simulator's physics engine and threading architecture as the factors limiting scalability. In this work, two threading designs are evaluated for how they perform under heavy physics load, to determine which design is optimal for future virtual trainers.
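
    Two such designs can be contrasted in miniature, with the caveat that this is our illustrative framing rather than the engine the paper evaluates: design A steps physics inline on the frame thread, so heavy physics blocks frame timing, while design B moves physics to a dedicated fixed-rate thread, decoupling it from the frame loop. (In CPython the GIL prevents a real speedup for CPU-bound work; the sketch shows the structural difference only.)

    ```python
    import threading
    import time

    def step_physics(objects):
        for obj in objects:
            obj["y"] += obj["vy"] * 0.01  # trivial integration stand-in

    def design_a_inline(objects, frames=100):
        for _ in range(frames):
            step_physics(objects)  # physics blocks the frame loop
            # ...render frame here...

    def design_b_dedicated(objects, seconds=1.0, rate_hz=100):
        stop = threading.Event()
        def loop():
            while not stop.is_set():
                step_physics(objects)
                time.sleep(1.0 / rate_hz)  # fixed-rate physics tick
        t = threading.Thread(target=loop, daemon=True)
        t.start()
        time.sleep(seconds)  # frame loop would run freely meanwhile
        stop.set()
        t.join()

    objects = [{"y": 0.0, "vy": -9.8} for _ in range(1000)]
    design_a_inline(objects)
    design_b_dedicated(objects)
    ```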

    Vertical Scalability Benchmarking In Three-Dimensional Virtual World Simulation

    The United States military is investigating large-scale, realistic virtual world simulations to facilitate warfighter training. As the simulation community strives to meet these military training objectives, methods must be developed and validated that measure scalability performance in these virtual world simulators. With such methods, the simulation community will be able to quantifiably compare scalability performance across system changes. This work contributes to that development-and-validation prerequisite by evaluating the effectiveness of commonly used system metrics for measuring scalability in a three-dimensional virtual trainer. Specifically, the metrics of CPU utilization and simulation frames per second are evaluated for their effectiveness in vertical scalability benchmarking.
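
    Collecting those two metrics from a running loop is straightforward; a minimal sketch follows, assuming the third-party psutil package is available (`pip install psutil`) and using a sleep as a stand-in for one simulation frame.

    ```python
    import time
    import psutil  # assumed third-party dependency

    def run_benchmark(seconds=5.0):
        frames = 0
        psutil.cpu_percent(interval=None)  # prime the utilization counter
        end = time.perf_counter() + seconds
        while time.perf_counter() < end:
            time.sleep(0.005)  # stand-in for one simulation frame
            frames += 1
        fps = frames / seconds
        cpu = psutil.cpu_percent(interval=None)  # mean utilization since priming
        print(f"simulation FPS: {fps:.1f}, CPU utilization: {cpu:.1f}%")

    run_benchmark()
    ```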

    Resource Allocation Predictive Modeling To Optimize Virtual World Simulator Performance

    Virtual world simulation for military training is an emerging domain. As such, detailed analysis is required to optimize the performance of the simulators. Unfortunately, due to a lack of extensive virtual world performance analysis, simulator administrators often make arbitrary resource allocations to support their environments and training scenarios. In this paper, we provide a lightweight predictive model to be used in an automated, dynamic resource-allocation system for the popular three-dimensional open-source virtual world simulator OpenSimulator. Prior to this investigation, only OpenSimulator developers and users with extensive experience with the platform could manually load-balance server resources based on anticipated usage. With the proposed system and its predictive model, the simulator advances toward an automated mechanism for determining the minimal critical resources required to support a target number of concurrent users in the virtual world.
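
    Once fitted, such a model can be inverted for allocation decisions: given a target user count, predict the required memory and round up to the smallest sufficient allocation. The sketch below shows this shape; the coefficients, headroom factor, and instance sizes are hypothetical placeholders, not figures from the paper.

    ```python
    SLOPE_MB_PER_USER = 30.0   # hypothetical fitted slope
    BASELINE_MB = 480.0        # hypothetical fitted intercept
    INSTANCE_SIZES_MB = [1024, 2048, 4096, 8192]

    def required_memory_mb(target_users, headroom=1.2):
        # Headroom guards against variance around the regression line.
        return (BASELINE_MB + SLOPE_MB_PER_USER * target_users) * headroom

    def smallest_instance(target_users):
        need = required_memory_mb(target_users)
        for size in INSTANCE_SIZES_MB:
            if size >= need:
                return size
        raise ValueError(f"no instance size covers {need:.0f} MB")

    print(smallest_instance(40))  # e.g. 40 users -> 2048 MB instance
    ```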